7 research outputs found
Efficient simulation of view synchrony
This report presents an algorithm for efficiently simulating view synchrony, including failure-atomic total-order multicast, in a discrete-time event simulator. We show how a view synchrony implementation tailored to a simulated environment removes the need for third-party middleware and detailed network simulation, thus reducing the complexity of a test environment. An additional advantage is that simulated view synchrony can generate all timing behaviours allowed by the model, instead of just those exhibited by a particular view synchrony implementation.
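The core idea can be sketched with a minimal discrete-time event simulator: multicasts are scheduled as delivery events on a single global queue, so every member of the current view observes the same messages in the same total order without any network model. This is an illustrative toy, not the report's actual algorithm, and all names here are hypothetical.

```java
// Hypothetical sketch: view-synchronous total-order multicast inside a
// discrete-time event simulator, with no middleware or network model.
import java.util.*;

public class ViewSim {
    record Event(long time, long seq, int member, String msg) {}

    // Tie-break equal timestamps by sequence number so delivery order
    // is deterministic and identical at every member.
    final PriorityQueue<Event> queue = new PriorityQueue<>(
        Comparator.comparingLong(Event::time).thenComparingLong(Event::seq));
    final List<Integer> view = new ArrayList<>();            // current view
    final Map<Integer, List<String>> delivered = new HashMap<>();
    long now = 0;
    long seqGen = 0;

    void join(int member) {
        view.add(member);
        delivered.put(member, new ArrayList<>());
    }

    // Multicast: schedule one delivery event per current-view member.
    void multicast(String msg, long delay) {
        for (int m : view) queue.add(new Event(now + delay, seqGen++, m, msg));
    }

    // Drain the event queue in timestamp order, delivering each message.
    void run() {
        Event e;
        while ((e = queue.poll()) != null) {
            now = e.time();
            delivered.get(e.member()).add(e.msg());
        }
    }

    public static void main(String[] args) {
        ViewSim sim = new ViewSim();
        sim.join(1); sim.join(2);
        sim.multicast("a", 5);
        sim.multicast("b", 5);
        sim.run();
        // Both members deliver the same messages in the same order.
        assert sim.delivered.get(1).equals(sim.delivered.get(2));
    }
}
```

Because the simulator controls all event timestamps directly, alternative timings can be explored simply by varying the delays, which is what makes exhaustive timing behaviours reachable.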
Flow Java: Declarative Concurrency for Java
This thesis presents the design, implementation, and evaluation of
Flow Java, a programming language for the implementation of concurrent
programs. Flow Java adds powerful programming abstractions for
automatic synchronization of concurrent programs to Java. The
abstractions added are single assignment variables (logic variables)
and futures (read-only views of logic variables).
The added abstractions conservatively extend Java with respect to
types, parameter passing, and concurrency. Futures support secure
concurrent abstractions and are essential for seamless integration of
single assignment variables into Java. These abstractions allow for
simple and concise implementation of high-level concurrent programming
abstractions.
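The two abstractions can be emulated in plain Java with monitors, which conveys their behaviour even though Flow Java provides them at the language level; the class and method names below are illustrative, not Flow Java syntax.

```java
// Sketch of a single-assignment variable and its read-only future,
// emulated in plain Java (Flow Java makes these language constructs).
public class SingleAssignment<T> {
    private T value;
    private boolean bound = false;

    // Bind at most once; a second bind is an error.
    public synchronized void bind(T v) {
        if (bound) throw new IllegalStateException("already bound");
        value = v;
        bound = true;
        notifyAll();                 // wake readers blocked in get()
    }

    // Reading an unbound variable blocks until it is bound: this is
    // the automatic synchronization described in the thesis.
    public synchronized T get() {
        while (!bound) {
            try { wait(); }
            catch (InterruptedException e) { throw new RuntimeException(e); }
        }
        return value;
    }

    // A future is a read-only view of the variable: it can be read
    // but offers no way to bind, which is what makes it secure.
    public Future asFuture() { return new Future(); }

    public class Future {
        public T get() { return SingleAssignment.this.get(); }
    }
}
```

A consumer thread may call get() before the producer binds; the read simply blocks, so no explicit locks or condition signalling appear in user code.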
Flow Java is implemented as a moderate extension to the
GNU gcj/libjava Java compiler and runtime environment. The
extension is not specific to a particular implementation; it could
easily be incorporated into other Java implementations.
The thesis presents three implementation strategies for single
assignment variables. One strategy uses forwarding and dereferencing
while the two others are variants of Taylor's scheme. Taylor's scheme
represents logic variables as a circular list. The thesis presents a
new adaptation of Taylor's scheme to a concurrent language using
operating system threads.
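The first strategy mentioned above, forwarding and dereferencing, can be sketched as follows; Taylor's circular-list scheme represents variables differently and is not shown. This is an illustrative sketch, not the thesis's actual implementation.

```java
// Sketch of the forwarding-and-dereferencing representation of logic
// variables (the first of the three strategies; names are illustrative).
public class LogicVar {
    // ref is null while unbound; after binding it holds either a plain
    // value or another LogicVar, in which case reads must forward.
    private Object ref;

    public synchronized void bindTo(Object target) {
        if (ref != null) throw new IllegalStateException("already bound");
        ref = target;
    }

    // Dereference: follow chains of variable-to-variable bindings
    // until a value (or a still-unbound variable) is reached.
    public static Object deref(Object o) {
        while (o instanceof LogicVar v) {
            synchronized (v) {
                if (v.ref == null) return v;   // still unbound
                o = v.ref;                     // forward one step
            }
        }
        return o;
    }
}
```

The cost of this scheme is the chain traversal on every read, which is one reason the thesis compares it against variants of Taylor's scheme.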
The Flow Java system is evaluated using standard Java
benchmarks. The evaluation shows that in most cases the overhead
incurred by the extensions is between 10% and 50%. For some
pathological cases the runtime increases by up to 150%. Concurrent
programs making use of Flow Java's automatic synchronization
generally perform as well as corresponding Java programs. In some
cases Flow Java programs outperform Java programs by as much as 33%.
BEAMJIT: a just-in-time compiling runtime for Erlang
BEAMJIT is a tracing just-in-time compiling runtime for the Erlang programming language. The core parts of BEAMJIT are synthesized from the C source code of BEAM, the reference Erlang abstract machine. The source code for BEAM's instructions is extracted automatically from BEAM's emulator loop, and from it both a tracing version of the abstract machine and a code generator are synthesized. BEAMJIT uses the LLVM toolkit for optimization and native code emission. The automatic synthesis process greatly reduces the amount of manual work required to maintain a just-in-time compiler, as it automatically tracks changes to the BEAM system. Performance is evaluated using the benchmark suite of HiPE, the Erlang ahead-of-time native compiler. For most benchmarks BEAMJIT delivers a performance improvement over BEAM, although in some cases, with known causes, it fails to deliver a performance boost. BEAMJIT does not yet match the performance of HiPE, mainly because it does not yet implement Erlang-specific optimizations such as boxing/unboxing elimination and a deep understanding of BIFs. Despite this, BEAMJIT reduces the runtime by up to 40% for some benchmarks.
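The general control flow of a tracing JIT, which BEAMJIT's synthesized machinery instantiates, can be illustrated with a toy profiler: the interpreter counts executions of each loop header and switches to trace recording once a header becomes hot. This sketch is generic tracing-JIT structure, not BEAMJIT's actual (synthesized, not hand-written) code; the threshold and names are invented.

```java
// Toy sketch of tracing-JIT hot-loop detection: profile loop headers
// in the interpreter and trigger trace recording at a threshold.
import java.util.*;

public class TracingProfiler {
    static final int HOT_THRESHOLD = 100;           // illustrative value
    private final Map<Integer, Integer> counts = new HashMap<>();
    private final Set<Integer> compiled = new HashSet<>();

    // Called by the interpreter at every backward-branch target (pc).
    // Returns true when the interpreter should start recording a trace,
    // which would then be handed to an optimizer/code generator.
    public boolean onLoopHeader(int pc) {
        if (compiled.contains(pc)) return false;    // native code exists
        int c = counts.merge(pc, 1, Integer::sum);
        if (c >= HOT_THRESHOLD) {
            compiled.add(pc);                       // record and compile
            return true;
        }
        return false;
    }
}
```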
Constraint-based Register Allocation and Instruction Scheduling
This paper introduces a constraint model and solving
techniques for code generation in a compiler back-end. It
contributes a new model for global register allocation that
combines several advanced aspects: multiple register banks
(subsuming spilling to memory), coalescing, and packing. The
model is extended to include instruction scheduling and
bundling. The paper introduces a decomposition scheme
exploiting the underlying program structure and exhibiting
robust behavior for functions with thousands of
instructions. Evaluation shows that code quality is on par
with LLVM, a state-of-the-art compiler infrastructure.
The paper makes important contributions to the applicability
of constraint programming as well as to compiler construction:
essential concepts are unified in a high-level model that
can be solved by readily available modern solvers. This is a
significant step towards basing code generation entirely on
a high-level model, thereby facilitating the construction
of correct, simple, flexible, robust, and high-quality code
generators.
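The flavour of casting register allocation as constraint solving can be conveyed with a toy model: temporaries are variables, registers are domain values, and interference edges impose disequality constraints. A naive backtracking search stands in here for the modern solvers the paper relies on; the paper's actual model (register banks, spilling, coalescing, packing, scheduling) is far richer than this sketch.

```java
// Toy constraint model for register allocation: assign each temporary
// a register such that interfering temporaries get distinct registers.
import java.util.*;

public class RegAllocCSP {
    // interference.get(i) holds the temporaries live simultaneously with i.
    // Returns a register per temporary, or null if numRegs is insufficient
    // (a full model would then consider spilling to memory).
    public static int[] allocate(List<Set<Integer>> interference, int numRegs) {
        int[] reg = new int[interference.size()];
        Arrays.fill(reg, -1);
        return search(interference, numRegs, reg, 0) ? reg : null;
    }

    private static boolean search(List<Set<Integer>> g, int k, int[] reg, int t) {
        if (t == reg.length) return true;           // all temporaries assigned
        for (int r = 0; r < k; r++) {
            boolean ok = true;
            for (int n : g.get(t))                  // disequality constraints
                if (reg[n] == r) { ok = false; break; }
            if (ok) {
                reg[t] = r;
                if (search(g, k, reg, t + 1)) return true;
                reg[t] = -1;                        // backtrack
            }
        }
        return false;
    }
}
```

For example, three pairwise-interfering temporaries are allocatable with three registers but not with two; real solvers apply propagation and the paper's decomposition scheme to scale this idea to functions with thousands of instructions.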